    Ethical Issues in Engineering Models: An Operations Researcher’s Reflections

    This article starts with an overview of the author's personal involvement, as an Operations Research consultant, in several engineering case studies that may raise ethical questions; e.g., case studies on nuclear waste, water management, sustainable ecology, military tactics, and animal welfare. All these case studies employ computer simulation models. In general, models are meant to solve practical problems, which may have ethical implications for the various stakeholders; namely, the modelers, the clients, and the public at large. The article further presents an overview of codes of ethics in a variety of disciplines. It discusses the role of mathematical models, focusing on the validation of these models' assumptions; documentation of these assumptions needs special attention. Some ethical norms and values may be quantified through a model's multiple performance measures, which might then be optimized. Uncertainty about a model's validity leads to risk or uncertainty analysis and to a search for robust models. Ethical questions may be pressing in military models, including war games; however, computer games and the related field of experimental economics may also provide a special tool for studying ethical issues. Finally, the article briefly discusses whistleblowing. Its many references to publications and websites enable further study of ethical issues in modeling.

    A novel framework for validating and applying standardized small area measurement strategies

    Background: Local measurements of health behaviors, diseases, and use of health services are critical inputs into local, state, and national decision-making. Small area measurement methods can deliver more precise and accurate local-level information than direct estimates from surveys or administrative records, where sample sizes are often too small to yield acceptable standard errors. However, small area measurement requires careful validation using approaches other than conventional statistical methods such as in-sample or cross-validation methods, because these do not solve the problem of validating estimates in data-sparse domains.

    Methods: A new general framework for small area estimation and validation is developed and applied to estimate Type 2 diabetes prevalence in US counties using data from the Behavioral Risk Factor Surveillance System (BRFSS). The framework combines the three conventional approaches to small area measurement: (1) pooling data across time by combining multiple survey years; (2) exploiting spatial correlation by including a spatial component; and (3) utilizing structured relationships between the outcome variable and domain-specific covariates, defining four increasingly complex model types, coined the Naive, Geospatial, Covariate, and Full models. The validation framework uses direct estimates of prevalence in large domains as the gold standard and compares model estimates against it using (i) all available observations for the large domains and (ii) systematically reduced sample sizes obtained through random sampling with replacement. At each sampling level, the model is rerun repeatedly, and the validity of the estimates from the four model types is determined by calculating the (average) concordance correlation coefficient (CCC) and (average) root mean squared error (RMSE) against the gold standard. The CCC is closely related to the intraclass correlation coefficient and can be used when the units are organized in groups and when it is of interest to measure the agreement between units in the same group (e.g., counties). The RMSE measures the differences between values predicted by a model or an estimator and the actually observed values; it is a useful measure of the precision of the model or estimator.

    Results: All model types have substantially higher CCC and lower RMSE than the direct, single-year BRFSS estimates. In addition, the inclusion of relevant domain-specific covariates generally improves predictive validity, especially at small sample sizes, and their leverage can be equivalent to a five- to tenfold increase in sample size.

    Conclusions: Small area estimation of important health outcomes and risk factors can be improved using a systematic modeling and validation framework, which consistently outperformed single-year direct survey estimates and demonstrated the potential leverage of including relevant domain-specific covariates compared to pure measurement models. The proposed validation strategy can be applied to other disease outcomes and risk factors in the US as well as to resource-scarce situations, including low-income countries. These estimates are needed by public health officials to identify at-risk groups, to design targeted prevention and intervention programs, and to monitor and evaluate results over time.
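
    As an illustration of the validation step described above, the following minimal Python sketch computes the (average) CCC and RMSE of model estimates against the gold-standard direct estimates at systematically reduced sample sizes drawn with replacement. The `fit_model` callback, the `records` array, and all names are hypothetical stand-ins for exposition, not the paper's code.

```python
import numpy as np

def ccc(est, gold):
    """Lin's concordance correlation coefficient between two estimate vectors."""
    me, mg = est.mean(), gold.mean()
    cov = ((est - me) * (gold - mg)).mean()
    return 2.0 * cov / (est.var() + gold.var() + (me - mg) ** 2)

def rmse(est, gold):
    """Root mean squared error of model estimates against the gold standard."""
    return np.sqrt(np.mean((est - gold) ** 2))

def validate(gold, fit_model, records, sample_sizes, n_reps=100, seed=0):
    """For each reduced sample size, resample records with replacement,
    refit the model, and average CCC and RMSE against the gold standard.

    gold      : direct estimates in the large (data-rich) domains
    fit_model : hypothetical callback mapping a sample of survey records
                to one model estimate per large domain
    records   : array of individual survey records
    """
    rng = np.random.default_rng(seed)
    results = {}
    for n in sample_sizes:
        cccs, rmses = [], []
        for _ in range(n_reps):
            sample = records[rng.integers(0, len(records), size=n)]
            est = fit_model(sample)
            cccs.append(ccc(est, gold))
            rmses.append(rmse(est, gold))
        results[n] = {"avg_ccc": np.mean(cccs), "avg_rmse": np.mean(rmses)}
    return results
```

    Averaging over repeated resamples at each sampling level mirrors the paper's strategy of rerunning the model repeatedly before scoring it against the gold standard.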

    Design of Experiments for Screening

    The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs, and systematic fractional replicate designs. Generic issues of aliasing, bias, and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performances are compared.
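
    As one concrete instance of the random sampling plans mentioned above, here is a minimal Python sketch of basic Latin hypercube sampling on the unit cube. It is a generic textbook construction for illustration, not code from the paper.

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Basic Latin hypercube sample of n points in the d-dimensional unit
    cube: each of the n equal-width strata of every axis contains exactly
    one point."""
    rng = np.random.default_rng(seed)
    # One uniformly random point inside each of the n strata, per dimension.
    points = (np.arange(n)[:, None] + rng.random((n, d))) / n
    # Independently shuffle the strata along each dimension so the pairing
    # of strata across dimensions is random.
    for j in range(d):
        points[:, j] = rng.permutation(points[:, j])
    return points

design = latin_hypercube(10, 3, seed=1)  # 10 runs for 3 input variables
```

    The stratification guarantees one-dimensional space-filling: projected onto any single input, the design covers all n strata, which is why such plans are popular for screening numerical models.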

    Improving the Efficiency of Physical Examination Services

    The objective of our project was to improve the efficiency of the physical examination screening service of a large hospital system. We began with a detailed simulation model to explore the relationships between four performance measures and three decision factors. We then sought the optimal physician inquiry starting time by solving a goal-programming problem whose objective function includes multiple goals. One of our simulation results shows that the proposed optimal physician inquiry starting time decreased patient wait times by 50% without increasing overall physician utilization.
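
    The abstract couples a simulation model with goal programming. Below is a minimal weighted goal-programming sketch in Python using scipy; the two goals, their coefficients, and targets are purely illustrative assumptions, whereas the paper's actual goals come from the simulation's performance measures.

```python
from scipy.optimize import linprog

# Decision variables: x1, x2 (e.g., two scheduling factors), plus one pair
# of deviation variables (under-, over-achievement) per goal. Toy goals:
#   goal 1:  2*x1 +   x2 ~= 10   (e.g., a wait-time target)
#   goal 2:    x1 + 3*x2 ~= 12   (e.g., a utilization target)
# Variable order: [x1, x2, d1_minus, d1_plus, d2_minus, d2_plus]

c = [0, 0, 1, 1, 1, 1]            # minimize total (equally weighted) deviation
A_eq = [[2, 1, 1, -1, 0, 0],      # 2*x1 +   x2 + d1- - d1+ = 10
        [1, 3, 0, 0, 1, -1]]      #   x1 + 3*x2 + d2- - d2+ = 12
b_eq = [10, 12]
bounds = [(0, None)] * 6          # all variables non-negative

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x1, x2 = res.x[:2]
print(f"decision: x1={x1:.2f}, x2={x2:.2f}, total deviation={res.fun:.2f}")
```

    Weighting the deviation variables differently would express the relative priority of the goals, which is how a goal program trades off multiple performance measures rather than optimizing a single one.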

    Evaluation of management information systems

    The economic evaluation of management information systems may be based on the following theories and techniques: control theory, system dynamics, (discrete-event) simulation, and gaming. Applications of these approaches are summarized, and the advantages and disadvantages of each approach are presented.

    Application-driven sequential designs for simulation experiments: Kriging metamodeling

    This paper proposes a novel method to select an experimental design for interpolation in simulation. Although the paper focuses on Kriging in deterministic simulation, the method also applies to other types of metamodels (besides Kriging) and to stochastic simulation. The paper focuses on simulations that require much computer time, so it is important to select a design with a small number of observations; the proposed method is therefore sequential. The novelty of the method is that it accounts for the specific input/output function of the particular simulation model at hand; that is, the method is application-driven or customized. This customization is achieved through cross-validation and jackknifing. The new method is tested on two academic applications, which demonstrate that it indeed gives better results than either sequential designs based on an approximate Kriging prediction variance formula or designs with prefixed sample sizes.
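
    A minimal sketch of the cross-validation/jackknife idea: refit the Kriging metamodel with each design point left out, then add the candidate input where the leave-one-out predictions disagree most. This uses scikit-learn's Gaussian process as a stand-in Kriging model and a toy test function; it only approximates the paper's procedure and is not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_design_point(X, y, candidates):
    """Pick the candidate input where leave-one-out (jackknife) Kriging
    predictions disagree most, i.e., where the metamodel is least settled."""
    loo_preds = []
    for i in range(len(X)):                     # leave out observation i
        keep = np.arange(len(X)) != i
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
        gp.fit(X[keep], y[keep])
        loo_preds.append(gp.predict(candidates))
    spread = np.asarray(loo_preds).var(axis=0)  # jackknife-style variance
    return candidates[np.argmax(spread)]

# Toy sequential step: a cheap stand-in for an expensive simulation.
def simulate(x):
    return np.sin(5.0 * x[:, 0])

X = np.linspace(0.0, 1.0, 5)[:, None]           # small pilot design
y = simulate(X)
candidates = np.linspace(0.0, 1.0, 101)[:, None]
x_new = next_design_point(X, y, candidates)     # run the simulation here next
```

    Because the selection criterion depends on the fitted predictions for the model at hand, the resulting design is application-driven in the sense the abstract describes, rather than fixed in advance.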